
    Non-paraxial design and fabrication of a compact OAM sorter in the telecom infrared

    A novel optical device is designed and fabricated to overcome the limits of the traditional sorter based on the log-pol optical transformation for the demultiplexing of optical beams carrying orbital angular momentum (OAM). The proposed configuration simplifies the alignment procedure and significantly improves the compactness and miniaturization of the optical architecture. Since the device is required to operate beyond the paraxial approximation, a rigorous formulation of transformation optics in the non-paraxial regime has been developed and applied. The sample has been fabricated as 256-level phase-only diffractive optics with high-resolution electron-beam lithography, and tested for the demultiplexing of OAM beams at the telecom wavelength of 1310 nm. The designed sorter can find promising applications in next-generation optical platforms for mode-division multiplexing based on OAM modes, both for free-space and multi-mode fiber transmission.
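    For reference, the conventional log-pol sorter mentioned above maps an input point (x, y) to output coordinates (u, v) through a conformal transformation of the following standard form, where a and b are scaling parameters of the optical design (this is the usual paraxial mapping, not the non-paraxial formulation developed in the paper):

        u = -a \,\ln\!\left(\frac{\sqrt{x^{2}+y^{2}}}{b}\right), \qquad
        v = a \,\arctan\!\left(\frac{y}{x}\right)

    The mapping unwraps the azimuthal phase of an OAM mode exp(i l \theta) into a transverse phase tilt proportional to l, so that a subsequent lens focuses each OAM order to a distinct lateral position.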

    Towards Distributed Mobile Computing

    In recent years, the market for mobile devices has grown exponentially. In this scenario, the rate at which mobile devices are replaced becomes particularly relevant: according to the International Telecommunication Union, smartphone owners replace their device every 20 months on average. The side effect of this trend is the disposal of an increasing amount of electronic devices which, in many cases, are still working. We believe that it is feasible to recover this unexploited computational power. Through a change of paradigm, it is possible to achieve a two-fold objective: 1) extend the lifetime of mobile devices, and 2) enable a new opportunity to speed up mobile applications. In this paper we provide a survey of state-of-the-art solutions moving towards a Distributed Mobile Computing paradigm. We highlight the challenges to be addressed in order to implement this paradigm and propose some possible future improvements.

    Effective runtime resource management using linux control groups with the BarbequeRTRM framework

    The extremely advanced process technology reached by silicon manufacturing (below 32 nm) has led to the production of computational platforms and SoCs featuring a considerable amount of resources. While such multi- and many-core platforms show growing performance capabilities, they are also increasingly affected by power, thermal, and reliability issues. Moreover, the increased computational capabilities allow congested usage scenarios with workloads subject to mixed and time-varying requirements. Effective usage of the resources should take into account both the application requirements and resource availability, with an arbiter, namely a resource manager, in charge of resolving the resource contention among demanding applications. Current operating systems (OSes) have only limited knowledge about application-specific behaviors and their time-varying requirements. Dedicated system interfaces to collect such inputs and forward them to the OS (e.g., its scheduler) are thus an interesting research area that aims at integrating the OS with an ad hoc resource manager. Such a component can exploit efficient low-level OS interfaces and mechanisms to extend its capabilities of controlling tasks and system resources. Because of the specific tasks and timings of a resource manager, this component can be easily and effectively developed as a user-space extension lying between the OS and the controlled applications. This article, which focuses on multicore Linux systems, shows a portable solution for enforcing runtime resource management decisions based on the standard control groups framework. A burst and a mixed workload analysis, performed on a multicore-based NUMA platform, has shown promising results both in terms of performance and power saving.
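    As a minimal sketch of the kind of control groups interface referred to above: creating a cgroup, constraining the CPUs and memory nodes it may use, and attaching a process. It assumes a cgroup v1 cpuset hierarchy mounted under /sys/fs/cgroup/cpuset (paths and controller files differ under cgroup v2), and it does not reproduce the BarbequeRTRM policies themselves.

        import os

        CGROUP_ROOT = "/sys/fs/cgroup/cpuset"   # assumed cgroup v1 cpuset mount point

        def confine(name, cpus, mems, pid):
            """Create a cpuset cgroup, restrict it to the given CPUs/memory nodes,
            and move the target process into it (requires root privileges)."""
            path = os.path.join(CGROUP_ROOT, name)
            os.makedirs(path, exist_ok=True)
            with open(os.path.join(path, "cpuset.cpus"), "w") as f:
                f.write(cpus)                    # e.g. "0-3"
            with open(os.path.join(path, "cpuset.mems"), "w") as f:
                f.write(mems)                    # e.g. "0"
            with open(os.path.join(path, "tasks"), "w") as f:
                f.write(str(pid))                # attach the process to the cgroup

        # Hypothetical usage: confine process 1234 to CPUs 0-3 on NUMA node 0.
        # confine("managed_app", "0-3", "0", 1234)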

    Probabilistic-WCET Reliability: Statistical Testing of EVT hypotheses

    In recent years, interest in probabilistic real-time computing has grown in response to the limitations of traditional static Worst-Case Execution Time (WCET) methods when performing timing analysis of applications running on complex systems, such as multi/many-cores and COTS platforms. Probabilistic theory can partially solve this problem, but it requires strong guarantees on the execution time traces in order to provide safe probabilistic-WCET estimations. These requirements can be verified through suitable statistical tests, as described in this paper. In this work, we also identify challenges and problems of using statistical testing procedures in probabilistic real-time computing, proposing a unified test procedure based on a single index called the Probabilistic Predictability Index (PPI). An experimental campaign has been carried out on both synthetic and realistic datasets, including an analysis of the impact of the Linux PREEMPT_RT patch on a modern complex platform as a use case of the proposed index.
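    As a rough illustration of the kind of statistical test involved (this is not the PPI definition from the paper, which is not reproduced here), the sketch below fits a Generalized Pareto Distribution to the peaks-over-threshold of a set of measured execution times and checks the fit with a Kolmogorov-Smirnov test, using SciPy:

        import numpy as np
        from scipy import stats

        def evt_fit_check(times, quantile=0.9):
            """Fit a GPD to execution-time exceedances over a high threshold and
            report the Kolmogorov-Smirnov goodness-of-fit p-value."""
            times = np.asarray(times, dtype=float)
            threshold = np.quantile(times, quantile)
            excesses = times[times > threshold] - threshold
            shape, loc, scale = stats.genpareto.fit(excesses, floc=0.0)
            ks_stat, p_value = stats.kstest(excesses, "genpareto", args=(shape, loc, scale))
            return shape, scale, p_value

        # Hypothetical usage with measured execution times (in cycles or seconds):
        # shape, scale, p = evt_fit_check(measured_times)
        # A low p-value suggests the GPD hypothesis should be rejected for this trace.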

    Extending a run-time resource management framework to support OpenCL and heterogeneous systems

    From mobile to High-Performance Computing (HPC) systems, performance and energy efficiency are becoming ever more challenging requirements. In this regard, heterogeneous systems, made of a general-purpose processor and one or more hardware accelerators, are emerging as affordable solutions. However, the effective exploitation of such platforms requires specific programming languages, such as OpenCL, and suitable run-time software layers. This work illustrates the extension of a run-time resource management (RTRM) framework to support the execution of OpenCL applications on systems featuring a multi-core CPU and multiple GPUs. Early results show how this solution benefits both the applications, in terms of performance, and the system, in terms of resource utilization, i.e., load balancing and thermal leveling over the computing devices.
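    A run-time layer of this kind must first discover the available OpenCL devices before deciding where to place work. The minimal sketch below enumerates platforms and their GPU devices with the pyopencl bindings (assumed to be installed); the actual RTRM integration described in the abstract is not shown.

        import pyopencl as cl

        def list_gpu_devices():
            """Enumerate OpenCL platforms and collect the GPU devices they expose,
            the kind of inventory a run-time resource manager needs before
            assigning OpenCL kernels to devices."""
            gpus = []
            for platform in cl.get_platforms():
                for device in platform.get_devices():
                    if device.type & cl.device_type.GPU:
                        gpus.append(device)
                        print(platform.name, "-", device.name,
                              "| compute units:", device.max_compute_units)
            return gpus

        # Hypothetical usage: prefer the GPU with the most compute units.
        # devices = list_gpu_devices()
        # best = max(devices, key=lambda d: d.max_compute_units) if devices else None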

    Test of mode-division multiplexing and demultiplexing in free-space with diffractive transformation optics

    In recent years, mode-division multiplexing (MDM) has been proposed as a promising solution to increase the information capacity of optical networks, both in free-space and in optical fiber transmission. Here we present the design, fabrication, and test of diffractive optical elements for mode-division multiplexing based on optical transformations in the visible range. Diffractive optics have been fabricated by means of 3D high-resolution electron beam lithography on a polymethylmethacrylate resist layer spun over a glass substrate. The same optical sequence was exploited both for input-mode multiplexing and for mode sorting after free-space propagation. Their high miniaturization level and efficiency make these optical devices ideal for integration into next-generation platforms for mode-division (de)multiplexing in telecom applications.

    The MIG Framework: Enabling Transparent Process Migration in Open MPI

    This paper introduces the mig framework: an Open MPI extension to transparently support the migration of application processes across different nodes of a distributed High-Performance Computing (HPC) system. The framework provides mechanisms on top of which suitable resource managers can implement policies to react to hardware faults, address performance variability, improve resource utilization, perform fine-grained load balancing, and manage power and thermal behavior. Compared to other state-of-the-art approaches, the mig framework does not require changes to the application code. Moreover, it is highly maintainable, since it is mainly a self-contained solution that required very few changes to other already existing Open MPI frameworks. Experimental results have shown that the proposed extension does not introduce significant overhead in the application execution, while the penalty of performing a migration can be properly taken into account by a resource manager.
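    To illustrate what "transparent" means from the application side, the sketch below is an ordinary MPI program written with the mpi4py bindings (assumed to be installed). With an approach of this kind, migration is handled by the runtime and the resource manager, so code like this would not need to be modified; the sketch shows an unmodified MPI application, not the mig framework's internal mechanisms.

        from mpi4py import MPI

        def main():
            comm = MPI.COMM_WORLD
            rank = comm.Get_rank()
            size = comm.Get_size()

            # Each rank contributes a partial value; where the ranks physically
            # run is decided by the runtime, and a transparent migration layer
            # could move them between nodes without touching this code.
            partial = float(rank)
            total = comm.allreduce(partial, op=MPI.SUM)

            if rank == 0:
                print(f"Sum over {size} ranks: {total}")

        if __name__ == "__main__":
            main()

        # Hypothetical launch: mpirun -np 4 python app.py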

    Risk factors and outcome among a large patient cohort with community-acquired acute hepatitis C in Italy

    BACKGROUND: The epidemiology of acute hepatitis C has changed during the past decade in Western countries. Acute HCV infection has a high rate of chronicity, but it is unclear when patients with acute infection should be treated. METHODS: To evaluate current sources of hepatitis C virus (HCV) transmission in Italy and to assess the rate of and factors associated with chronic infection, we enrolled 214 consecutive patients with newly acquired hepatitis C during 1999-2004. The patients were from 12 health care centers throughout the country, and they were followed up for a mean (+/- SD) period of 14 +/- 15.8 months. Biochemical liver tests were performed, and HCV RNA levels were monitored. RESULTS: A total of 146 patients (68%) had symptomatic disease. The most commonly reported risk factors for acquiring hepatitis C were intravenous drug use and medical procedures. The proportion of subjects with spontaneous resolution of infection was 36%. The average time span from disease onset to HCV RNA clearance was 71 days (range, 27-173 days); indeed, 58 (80%) of 73 patients with self-limiting hepatitis experienced HCV RNA clearance within 3 months of disease onset. Multiple logistic regression analyses showed that none of the variables considered (including asymptomatic disease) was associated with an increased risk of developing chronic hepatitis C. CONCLUSIONS: These findings underscore the importance of medical procedures as risk factors in the current spread of HCV infection in Italy. Because nearly all patients with acute, self-limiting hepatitis C, both symptomatic and asymptomatic, have spontaneous viral clearance within 3 months of disease onset, it seems reasonable to start treatment only after this period ends, to avoid costly and unnecessary treatment.

    The Misconception of Exponential Tail Upper-Bounding in Probabilistic Real-Time

    Measurement-Based Probabilistic Timing Analysis, a probabilistic real-time computing method, is based on Extreme Value Theory (EVT), a statistical theory applied to Worst-Case Execution Time analysis of real-time embedded systems. The output of EVT is a statistical distribution, in the form of a Generalized Extreme Value Distribution or a Generalized Pareto Distribution, whose cumulative distribution function can asymptotically assume one of three possible forms: light, exponential, or heavy tail. Recently, several works have proposed to upper-bound light-tail distributions with their exponential version. In this paper, we show that this assumption is valid only under certain conditions and that it is often misinterpreted. This can lead to unsafe estimations of the worst-case execution time, which cannot be accepted in applications targeting safety-critical embedded systems.
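    As a rough illustration of the light-versus-exponential distinction discussed above (a generic sketch, not the paper's argument about when the bounding is safe): for the Generalized Pareto Distribution the shape parameter xi determines the tail class, with xi < 0 giving a bounded light tail, xi = 0 an exponential tail, and xi > 0 a heavy tail. The snippet below compares the survival functions of a light-tailed GPD and of an exponential distribution with the same scale, using SciPy:

        import numpy as np
        from scipy import stats

        scale = 1.0
        light = stats.genpareto(c=-0.2, scale=scale)   # xi < 0: light, bounded tail
        expo  = stats.expon(scale=scale)               # xi = 0: exponential tail

        # Compare exceedance probabilities P(X > x): the light-tailed GPD has
        # bounded support (here [0, scale/0.2] = [0, 5]), while the exponential
        # decays forever but never reaches zero.
        for x in [0.5, 1.0, 2.0, 4.0]:
            print(f"x={x:4.1f}  light-tail sf={light.sf(x):.4e}  exponential sf={expo.sf(x):.4e}")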